
    From Many-Valued Consequence to Many-Valued Connectives

    Given a consequence relation in many-valued logic, what connectives can be defined? For instance, does there always exist a conditional operator internalizing the consequence relation, and what form should it take? In this paper, we pose this question in a multi-premise multi-conclusion setting for the class of so-called intersective mixed consequence relations, which extends the class of Tarskian relations. Using computer-aided methods, we give an extensive answer for 3-valued and 4-valued logics, focusing not only on conditional operators but also on what we call Gentzen-regular connectives (including negation, conjunction, and disjunction). For arbitrary N-valued logics, we state necessary and sufficient conditions for the existence of such connectives in a multi-premise multi-conclusion setting. The results show that mixed consequence relations admit all classical connectives, and that among them the pure consequence relations are those which admit no other Gentzen-regular connectives. Conditionals can also be found for a broader class of intersective mixed consequence relations, but not for order-theoretic consequence relations.
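
    The computer-aided search mentioned in the abstract can be pictured with a toy brute-force enumeration over truth tables. The sketch below is only an illustration, not the paper's procedure: it fixes one mixed consequence relation (a strict-tolerant style pairing of designated sets) and filters candidate 3-valued conditionals with a simplified pointwise internalization condition that is an assumption here, not the paper's definition.

```python
# Toy illustration (not the paper's code): brute-force search for 3-valued
# binary connectives "->" satisfying a simplified, pointwise internalization
# condition for a mixed (strict-tolerant style) consequence relation.
from itertools import product

VALUES = (0, 1, 2)                 # 0 = false, 1 = intermediate, 2 = true
PREMISE_DESIGNATED = {2}           # "strict" reading of premises
CONCLUSION_DESIGNATED = {1, 2}     # "tolerant" reading of conclusions

def internalizes(table):
    """Simplified pointwise check: x -> y is conclusion-designated exactly
    when x being premise-designated forces y to be conclusion-designated."""
    for x, y in product(VALUES, repeat=2):
        should_hold = (x not in PREMISE_DESIGNATED) or (y in CONCLUSION_DESIGNATED)
        if (table[(x, y)] in CONCLUSION_DESIGNATED) != should_hold:
            return False
    return True

# Enumerate all 3^9 candidate truth tables for a binary connective.
inputs = list(product(VALUES, repeat=2))
candidates = []
for outputs in product(VALUES, repeat=len(inputs)):
    table = dict(zip(inputs, outputs))
    if internalizes(table):
        candidates.append(table)

print(len(candidates), "candidate conditionals satisfy the toy condition")
```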

    Suszko's Problem: Mixed Consequence and Compositionality

    Suszko's problem is the problem of finding the minimal number of truth values needed to semantically characterize a syntactic consequence relation. Suszko proved that every Tarskian consequence relation can be characterized using only two truth values. Malinowski showed that this number can equal three if some of Tarski's structural constraints are relaxed. By so doing, Malinowski introduced a case of so-called mixed consequence, allowing the notion of a designated value to vary between the premises and the conclusions of an argument. In this paper we give a more systematic perspective on Suszko's problem and on mixed consequence. First, we prove general representation theorems relating structural properties of a consequence relation to their semantic interpretation, uncovering the semantic counterpart of substitution-invariance and establishing that (intersective) mixed consequence is fundamentally the semantic counterpart of the structural property of monotonicity. We use these theorems to derive maximum-rank results proved recently in a different setting by French and Ripley, as well as by Blasio, Marcos and Wansing, for logics with various structural properties (reflexivity, transitivity, none, or both). We strengthen these results into exact rank results for non-permeable logics (roughly, those which distinguish the role of premises and conclusions). We discuss the underlying notion of rank, and the associated reduction proposed independently by Scott and Suszko. As emphasized by Suszko, that reduction fails to preserve compositionality in general, meaning that the resulting semantics is no longer truth-functional. We propose a modification of that notion of reduction, allowing us to prove that over compact logics with what we call regular connectives, rank results are maintained even if we require the preservation of truth-functionality and additional semantic properties.
    Keywords: Suszko's thesis; truth value; logical consequence; mixed consequence; compositionality; truth-functionality; many-valued logic; algebraic logic; substructural logics; regular connectives
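
    The Scott-Suszko reduction discussed above can be pictured with a minimal sketch: each many-valued valuation is collapsed into a bivalent one that records only whether a formula receives a designated value. The snippet below is illustrative rather than the paper's formalism; it assumes a 3-valued setting with strong-Kleene negation and a tolerant choice of designated values, simply to make the loss of truth-functionality concrete.

```python
# Minimal sketch (illustrative, not the paper's formalism) of the Scott-Suszko
# reduction: collapse a many-valued valuation into a bivalent one that only
# records whether each formula takes a designated value.

VALUES = (0, 1, 2)            # 0 = false, 1 = intermediate, 2 = true
DESIGNATED = {1, 2}           # a "tolerant" choice of designated values
NEG = {0: 2, 1: 1, 2: 0}      # strong-Kleene negation, as a toy connective

def reduce_to_bivalent(value):
    """Suszko-style reduction: keep only 'designated or not'."""
    return 1 if value in DESIGNATED else 0

# Two 3-valued valuations of an atom p that the reduction cannot tell apart...
for p in (2, 1):
    not_p = NEG[p]
    print(f"v(p) = {p}: reduced p = {reduce_to_bivalent(p)}, "
          f"reduced ~p = {reduce_to_bivalent(not_p)}")
# ...yet they assign different reduced values to ~p: the reduced semantics
# agrees on p but not on ~p, so it is no longer truth-functional.
```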

    What do monkey calls mean?

    Grant acknowledgements: Chemla and Schlenker: Research by Schlenker and Chemla was conducted at Institut d'Etudes Cognitives, Ecole Normale Supérieure – PSL Research University. Institut d'Etudes Cognitives is supported by grants ANR-10-LABX-0087 IEC and ANR-10-IDEX-0001-02 PSL. Schlenker: The research leading to these results received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n°324115-FRONTSEM (PI: Schlenker). Zuberbühler: The research leading to these results received funding from the European Research Council under ERC grant 'Prilang 283871' and also from the Swiss National Science Foundation under grant 'FN 310030_143359/1'. The project also benefited from the support of the Centre Suisse de Recherches Scientifiques en Côte d'Ivoire and the Taï Monkey Project.

    A field of primate linguistics is gradually emerging. It combines general questions and tools from theoretical linguistics with rich data gathered in experimental primatology. Analyses of several monkey systems have uncovered very simple morphological and syntactic rules, and they have led to the development of a primate semantics which asks new questions about the division of semantic labor between the literal meaning of monkey calls, additional mechanisms of pragmatic enrichment, and the environmental context. We show that comparative studies across species may validate this program, and may in some cases help reconstruct the evolution of monkey communication over millions of years.

    Mouse tracking as a window into decision making

    Mouse tracking promises to be an efficient method to investigate the dynamics of cognitive processes: It is easier to deploy than eyetracking, yet in principle it is much more fine-grained than looking at response times. We investigated these claimed benefits directly, asking how the features of decision processes—notably, decision changes—might be captured in mouse movements. We ran two experiments, one in which we explicitly manipulated whether our stimuli triggered a flip in decision, and one in which we replicated more ecological, classical mouse-tracking results on linguistic negation (Dale & Duran, Cognitive Science, 35, 983–996, 2011). We concluded, first, that spatial information (mouse path) is more important than temporal information (speed and acceleration) for detecting decision changes, and we offer a comparison of the sensitivities of various typical measures used in analyses of mouse tracking (area under the trajectory curve, direction flips, etc.). We do so using an “optimal” analysis of our data (a linear discriminant analysis explicitly trained to classify trajectories) and see what type of data (position, speed, or acceleration) it capitalizes on. We also quantify how its results compare with those based on more standard measures.
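
    As a rough illustration of the kind of comparison described in the abstract, the sketch below computes one standard spatial measure (the area between the mouse path and the straight start-to-target line) and trains a linear discriminant classifier directly on resampled positions. The trajectory simulator, labels, and feature choices are placeholders for illustration, not the authors' pipeline or data.

```python
# Rough sketch (synthetic data, not the authors' pipeline): compare a classic
# spatial mouse-tracking measure with an LDA trained on raw positions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
N_TRIALS, N_SAMPLES = 200, 50            # trajectories resampled to 50 time steps

def simulate_x_path(change_of_mind):
    """Toy x-coordinate of a mouse path moving toward a target at x = 1;
    'change of mind' trials first drift toward a competitor at x = -1."""
    t = np.linspace(0, 1, N_SAMPLES)
    if change_of_mind:
        x = np.where(t < 0.4, -t, -0.4 + (1.4 / 0.6) * (t - 0.4))
    else:
        x = t
    return x + rng.normal(0, 0.05, N_SAMPLES)

labels = rng.integers(0, 2, N_TRIALS)
paths = np.stack([simulate_x_path(bool(y)) for y in labels])

# Classic spatial measure: area between the path and the direct start-to-target line.
straight = np.linspace(0, 1, N_SAMPLES)
dev = np.abs(paths - straight)
area = ((dev[:, :-1] + dev[:, 1:]) / 2).sum(axis=1) / (N_SAMPLES - 1)  # trapezoid rule
print("Mean area by condition:", [round(area[labels == c].mean(), 3) for c in (0, 1)])

# "Optimal" analysis in the spirit of the article: an LDA trained on raw positions.
half = N_TRIALS // 2
lda = LinearDiscriminantAnalysis().fit(paths[:half], labels[:half])
print("LDA accuracy on held-out trials:", lda.score(paths[half:], labels[half:]))
```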

    Connecting content and logical words


    Benchmarking Neural Network Generalization for Grammar Induction

    How well do neural networks generalize? Even for grammar induction tasks, where the target generalization is fully known, previous works have left the question open, testing very limited ranges beyond the training set and using different success criteria. We provide a measure of neural network generalization based on fully specified formal languages. Given a model and a formal grammar, the method assigns a generalization score representing how well a model generalizes to unseen samples in inverse relation to the amount of data it was trained on. The benchmark includes languages such as a^n b^n, a^n b^n c^n, a^n b^m c^(n+m), and Dyck-1 and Dyck-2. We evaluate selected architectures using the benchmark and find that networks trained with a Minimum Description Length (MDL) objective generalize better, and from less data, than networks trained with standard loss functions. The benchmark is available at https://github.com/taucompling/bliss.
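
    The formal languages in the benchmark are simple enough that samples can be generated exhaustively. The sketch below shows what such generators might look like; the function names and the train/test split are assumptions for illustration, not the API of the linked repository.

```python
# Minimal sketch of generators for the kinds of formal languages the benchmark
# uses (assumed shapes; see the linked repository for the actual benchmark).
def a_n_b_n(max_n):
    """Strings a^n b^n for 1 <= n <= max_n."""
    return ["a" * n + "b" * n for n in range(1, max_n + 1)]

def a_n_b_m_c_n_plus_m(max_n):
    """Strings a^n b^m c^(n+m) for 1 <= n, m <= max_n."""
    return ["a" * n + "b" * m + "c" * (n + m)
            for n in range(1, max_n + 1) for m in range(1, max_n + 1)]

def is_dyck1(s):
    """Membership test for Dyck-1 (balanced strings over one bracket pair)."""
    depth = 0
    for ch in s:
        depth += 1 if ch == "(" else -1
        if depth < 0:
            return False
    return depth == 0

# A generalization score in this spirit tests a trained model on strings longer
# than anything seen during training, e.g.:
train = a_n_b_n(50)
train_set = set(train)
unseen = [s for s in a_n_b_n(200) if s not in train_set]
print(len(train), "training strings,", len(unseen), "longer unseen strings")
print(len(a_n_b_m_c_n_plus_m(5)), "a^n b^m c^(n+m) strings with n, m <= 5")
```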

    Minimum Description Length Hopfield Networks

    Associative memory architectures are designed for memorization but also offer, through their retrieval method, a form of generalization to unseen inputs: stored memories can be seen as prototypes from this point of view. Focusing on Modern Hopfield Networks (MHN), we show that a large memorization capacity undermines the generalization opportunity. We offer a solution to better optimize this tradeoff. It relies on Minimum Description Length (MDL) to determine during training which memories to store, as well as how many of them.
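
    A minimal sketch of the retrieval step in a Modern Hopfield Network, together with a simplified MDL-flavored score that trades off the cost of storing memories against reconstruction error, is given below. The scoring function and the memory-selection loop are stand-ins for illustration, not the paper's objective or implementation.

```python
# Minimal sketch (illustrative, not the paper's implementation): Modern Hopfield
# retrieval plus a simplified MDL-style score trading off how many memories are
# stored against how well the data are reconstructed.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def retrieve(memories, query, beta=4.0):
    """One step of Modern Hopfield retrieval: softmax-weighted sum of memories."""
    weights = softmax(beta * memories @ query)
    return memories.T @ weights

def mdl_score(memories, data, bits_per_memory=32):
    """Simplified two-part code: cost of storing the memories plus a
    reconstruction-error term for the data (a stand-in for a real code length)."""
    storage = memories.size * bits_per_memory
    errors = [np.sum((retrieve(memories, x) - x) ** 2) for x in data]
    return storage + float(np.sum(errors))

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 16))
for k in (2, 10, 50):                       # candidate numbers of stored memories
    memories = data[:k]                     # naive choice: store the first k items
    print(k, "memories -> MDL-style score:", round(mdl_score(memories, data), 1))
```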

    Shared and distinct mechanisms in deriving linguistic enrichment

    Meanings of basic expressions can be enriched by considering what the speaker could have said, but chose not to, that is, the alternatives. We report three priming experiments that test whether there are shared enrichment mechanisms across a diverse range of linguistic categories. We find that quantifier, number, and ad hoc enrichments exhibit robust priming within their categories and between each other. Plural enrichments, in contrast, demonstrate within-category priming but no between-category priming. Our results demonstrate that (1) enrichments typically thought of as pragmatic or semantic can be primed in the same way as syntactic structures, and (2) some mechanisms are shared across different enrichment categories, while some phenomena (e.g., plurals) fall outside this class. We discuss the implications of our findings for psychological models of enrichment, theories of individual categories of enrichment, and structural priming.